
    A toolbox for animal call recognition

    Monitoring the natural environment is increasingly important as habitat degradation and climate change reduce the world’s biodiversity. We have developed software tools and applications to assist ecologists with the collection and analysis of acoustic data at large spatial and temporal scales. One of our key objectives is automated animal call recognition, and our approach has three novel attributes. First, we work with raw environmental audio, contaminated by noise and artefacts and containing calls that vary greatly in volume depending on the animal’s proximity to the microphone. Second, initial experimentation suggested that no single recognizer could deal with the enormous variety of calls. Therefore, we developed a toolbox of generic recognizers to extract invariant features for each call type. Third, many species are cryptic and offer little data with which to train a recognizer. Many popular machine learning methods require large volumes of training and validation data and considerable time and expertise to prepare. Consequently, we adopt bootstrap techniques that can be initiated with little data and refined subsequently. In this paper, we describe our recognition tools and present results for real ecological problems.
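
    The bootstrap idea mentioned above can be illustrated with a short sketch. The snippet below is not the paper’s implementation: the classifier (a scikit-learn random forest), the round and candidate counts, and the `verify` callable standing in for an expert reviewer are all illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def bootstrap_recognizer(seed_X, seed_y, unlabelled_X, verify, rounds=3, top_k=20):
    """Grow a call recognizer from a small seed set by bootstrapping.

    seed_X/seed_y: feature vectors and 0/1 labels for the few known examples
                   (both classes must be present in the seed).
    unlabelled_X:  feature vectors from recordings not yet labelled.
    verify:        callable standing in for an expert who confirms (1) or
                   rejects (0) a candidate detection.
    """
    X, y = np.asarray(seed_X, dtype=float), np.asarray(seed_y)
    pool = np.asarray(unlabelled_X, dtype=float)
    model = RandomForestClassifier(n_estimators=100)
    for _ in range(rounds):
        if len(pool) == 0:
            break
        model.fit(X, y)
        conf = model.predict_proba(pool)[:, 1]               # confidence of target call
        picks = np.argsort(conf)[::-1][:top_k]               # most confident candidates
        labels = np.array([verify(pool[i]) for i in picks])  # expert confirmation
        X = np.vstack([X, pool[picks]])                      # grow the training set
        y = np.concatenate([y, labels])
        pool = np.delete(pool, picks, axis=0)                # remove reviewed items
    model.fit(X, y)
    return model
```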

    Frank and Stein : from research scientist to creative writer (a screenplay and exegesis)

    The screenplay, “Perfect Blood” (Frank and Stein), is the first two-hour episode of a two-part television miniseries Frank and Stein. This creative work is a science fiction story that speculates on the future of Western nations in a world where petroleum is scarce. A major theme that has been explored in the miniseries is the tension between the advantages and dangers of scientific progress without regard to human consequences. “Perfect Blood” (Frank and Stein) was written as part of my personal creative journey, which has been the transformation from research scientist to creative writer. In the exegetical component of this thesis, I propose that a key challenge for any scientist writing science fiction is the shift from conducting empirical research in a laboratory-based situation to engaging in creative practice research. During my personal creative journey, I found that a predominant difficulty in conducting research within a creative practice-led paradigm was unleashing my creativity and personal viewpoint, practices that are frowned upon in scientific research. The aim of the exegesis is to demonstrate that the transformative process from science to art is not neat and well-structured. My personal creative journey was fraught with many ‘wrong’ turns. However, after reflecting on the experience, I realise that every varied piece of research that I undertook allowed me to progress to the next stage, the next draft of Frank and Stein. And via the disorder of the creative process, a screenplay finally emerged that was both structured and creative, which are equally essential elements in screenwriting.

    Technical report : acoustic analysis of the natural environment

    This technical report is concerned with one aspect of environmental monitoring—the detection and analysis of acoustic events in sound recordings of the environment. Sound recordings offer ecologists the potential advantages of cheaper and more extensive sampling. An acoustic event detection algorithm is introduced that outputs a compact rectangular marquee description of each event. It can disentangle superimposed events, which are a common occurrence during morning and evening choruses. Next, three uses to which acoustic event detection can be put are illustrated. These tasks have been selected because they illustrate quite different modes of analysis: (1) the detection of diffuse events caused by wind and rain, which are a frequent contaminant of recordings of the terrestrial environment; (2) the detection of bird calls using the spatial distribution of their component events; and (3) the preparation of acoustic maps for whole ecosystem analysis. This last task utilises the temporal distribution of events over a daily, monthly or yearly cycle.
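
    As a rough illustration of how a rectangular marquee description of an event can be produced, the sketch below thresholds a spectrogram against a per-bin background estimate and reports the bounding rectangle of each connected region. This is not the report’s algorithm; the window size, threshold, and median-based noise estimate are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage, signal

def detect_events(waveform, sample_rate, threshold_db=6.0):
    """Return rectangular (t_start, t_end, f_low, f_high) marquees for events.

    Sketch only: a median per-frequency-bin background is subtracted from a
    dB spectrogram, the result is thresholded, and each connected region of
    the binary mask is reported as a bounding rectangle.
    """
    freqs, times, sxx = signal.spectrogram(waveform, fs=sample_rate,
                                           nperseg=512, noverlap=256)
    db = 10.0 * np.log10(sxx + 1e-10)
    background = np.median(db, axis=1, keepdims=True)   # per-bin noise estimate
    mask = (db - background) > threshold_db             # binary event mask
    labelled, n_events = ndimage.label(mask)            # connected components
    events = []
    for f_slice, t_slice in ndimage.find_objects(labelled):
        events.append((times[t_slice.start], times[t_slice.stop - 1],
                       freqs[f_slice.start], freqs[f_slice.stop - 1]))
    return events
```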

    A correspondence framework for surface matching algorithms

    Computer vision tasks such as three-dimensional (3D) registration, 3D modelling, and 3D object recognition are becoming more and more useful in industry, and have applications such as reverse CAD engineering and robot navigation. Each of these applications uses correspondence algorithms as part of its processes. Correspondence algorithms are required to compute accurate mappings between artificial surfaces that represent actual objects or scenes. In industry, inaccurate correspondence is related to factors such as costs in time and labour, and also safety. Therefore, it is essential to select an appropriate correspondence algorithm for a given surface matching task. However, current research in the area of surface correspondence is hampered by an abundance of application-specific algorithms, and no uniform terminology or consistent model for selecting and/or comparing algorithms. This dissertation presents a correspondence framework for surface matching algorithms. The framework is a conceptual model that is implementable. It is designed to assist in the analysis, comparison, development, and implementation of correspondence algorithms, which are essential tasks when selecting or creating an algorithm for a particular application. The primary contribution of the thesis is the correspondence framework presented as a conceptual model for surface matching algorithms. The model provides a systematic method for analysing, comparing, and developing algorithms. The dissertation demonstrates that by dividing correspondence computation into five stages (region definition, feature extraction, feature representation, local matching, and global matching), the task becomes smaller and more manageable. It also shows that the same stages of different algorithms are directly comparable. Furthermore, novel algorithms can be created by simply connecting compatible stages of different algorithms. Finally, new ideas can be synthesised by creating only the stages to be tested, without developing a whole new correspondence algorithm. The secondary contribution that is outlined is the correspondence framework presented as a software design tool for surface matching algorithms. The framework is shown to reduce the complexity of implementing existing algorithms within the framework. This is done by encoding algorithms in a stage-wise procedure, whereby an algorithm is separated into the five stages of the framework. The software design tool is shown to validate the integrity of restructuring existing algorithms within it, and also to provide an efficient basis for creating new algorithms. The third contribution is the specification of a quality metric for algorithm comparison. The metric is used to assess the accuracy of the outcomes of a number of correspondence algorithms, which are used to match a wide variety of input surface pairs. The metric is used to demonstrate that each algorithm is application specific, and to highlight the types of surfaces that can be matched by each algorithm. Thus, it is shown that algorithms that are implemented within the framework can be selected for particular surface correspondence tasks. The final contribution made in this dissertation is the expansion of the correspondence framework beyond the surface matching domain. The correspondence framework is maintained in its original form, and is used for image matching algorithms. Existing algorithms from three image matching applications are implemented and modified using the framework. It is shown how the framework provides a consistent means and uniform terminology for developing both surface and image matching algorithms. In summary, this thesis presents a correspondence framework for surface matching algorithms. The framework is general, encompassing a comprehensive set of algorithms, and flexible, expanding beyond surface matching to major image matching applications.
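
    To make the five-stage decomposition concrete, here is a minimal sketch of how the framework’s stages could be composed in code. The class name, the callable-based stage interface, and the language (Python) are assumptions made for illustration; the dissertation does not prescribe this API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class CorrespondencePipeline:
    """Hypothetical stage-wise pipeline mirroring the framework's five stages."""
    region_definition: Callable[[Any], Any]
    feature_extraction: Callable[[Any], Any]
    feature_representation: Callable[[Any], Any]
    local_matching: Callable[[Any, Any], Any]
    global_matching: Callable[[Any], Any]

    def run(self, surface_a, surface_b):
        # The first three stages describe each surface independently.
        def describe(surface):
            regions = self.region_definition(surface)
            features = self.feature_extraction(regions)
            return self.feature_representation(features)

        desc_a, desc_b = describe(surface_a), describe(surface_b)
        # Local matching proposes candidate correspondences between the two
        # descriptions; global matching enforces one consistent mapping.
        candidates = self.local_matching(desc_a, desc_b)
        return self.global_matching(candidates)
```

    Swapping any one field for a compatible stage taken from a different algorithm yields a new hybrid, which is the kind of recombination the framework is intended to support.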

    Scaling Acoustic Data Analysis Through Collaboration and Automation

    Monitoring and assessing environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods of time. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data manipulation and analysis. We present our current research into techniques for analysing large volumes of acoustic data effectively and efficiently. We provide an overview of a novel online acoustic environmental workbench and discuss a number of approaches to scaling the analysis of acoustic data: collaboration, manual, automatic and human-in-the-loop analysis.

    Breastfeeding in women with impairments of body structure and body function

    Master’s thesis submitted for the academic degree of “Master of Science in Nursing and Social Sciences: Case Management for Barrier-Free Living” in BaSys.

    Technical report : acoustic analysis of the natural environment

    This technical report is concerned with one aspect of environmental monitoring—the detection and analysis of acoustic events in sound recordings of the environment. Sound recordings offer ecologists the advantage of cheaper and more extensive sampling but make available so much data that automated analysis becomes essential. The report describes a number of tools for automated analysis of recordings, including noise removal from spectrograms, acoustic event detection, event pattern recognition, spectral peak tracking, syntactic pattern recognition applied to call syllables, and oscillation detection. These algorithms are applied to a number of animal call recognition tasks, chosen because they illustrate quite different modes of analysis: (1) the detection of diffuse events caused by wind and rain, which are frequent contaminants of recordings of the terrestrial environment; (2) the detection of bird calls; and (3) the preparation of acoustic maps for whole ecosystem analysis. This last task utilises the temporal distribution of events over a daily, monthly or yearly cycle.
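
    Of the tools listed, oscillation detection lends itself to a compact illustration: many target calls are trains of regularly repeated pulses, so a repetition rate can be estimated from the envelope of a band-passed signal. The sketch below is a stand-in, not the report’s algorithm; the band limits, filter order, and plausible pulse-rate range are illustrative defaults.

```python
import numpy as np
from scipy import signal

def oscillation_rate(waveform, sample_rate, f_low=600.0, f_high=1200.0,
                     min_rate=1.0, max_rate=50.0):
    """Estimate the pulse-repetition rate (Hz) of a call within a band.

    Sketch only: band-pass the signal, take its amplitude envelope, and pick
    the dominant peak of the envelope spectrum within a plausible range of
    pulse-repetition rates.
    """
    sos = signal.butter(4, [f_low, f_high], btype="bandpass",
                        fs=sample_rate, output="sos")
    band = signal.sosfilt(sos, waveform)
    envelope = np.abs(signal.hilbert(band))        # amplitude envelope
    envelope = envelope - envelope.mean()          # remove DC before FFT
    spectrum = np.abs(np.fft.rfft(envelope))
    rates = np.fft.rfftfreq(len(envelope), d=1.0 / sample_rate)
    in_range = (rates >= min_rate) & (rates <= max_rate)
    return float(rates[in_range][np.argmax(spectrum[in_range])])
```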

    Cloud-based medical image collection database with automated annotation

    Typical medical image annotation systems use manual annotation or complex proprietary software such as computer-assisted diagnosis. A more objective approach is required to achieve generalised Content Based Image Retrieval (CBIR) functionality. The Automated Medical Image Collection Annotation (AMICA) toolkit described here addresses this need. A range of content analysis functions is provided to tag images and image regions. The user uploads a DICOM file to an online portal and the software finds and displays images that have similar characteristics. AMICA has been developed to run in the Microsoft cloud environment using the Windows Azure platform, to cater for the storage requirements of typical large medical image databases.
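
    The retrieval step described above can be sketched in a few lines: extract a content descriptor from the uploaded DICOM image and rank stored images by descriptor distance. AMICA’s actual feature set and matching method are not given in the abstract, so the histogram descriptor, Euclidean distance, and use of pydicom below are assumptions for illustration only.

```python
import numpy as np
import pydicom  # reads DICOM files

def histogram_descriptor(dicom_path, bins=64):
    """Read a DICOM image and return a normalised intensity histogram.

    A stand-in content descriptor; AMICA's real features are not specified.
    """
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    hist, _ = np.histogram(pixels, bins=bins,
                           range=(float(pixels.min()), float(pixels.max())))
    return hist / max(hist.sum(), 1)

def rank_by_similarity(query_path, collection):
    """Rank stored images by Euclidean distance between descriptors.

    collection: mapping of image identifier -> precomputed descriptor.
    """
    query = histogram_descriptor(query_path)
    distances = {name: float(np.linalg.norm(query - desc))
                 for name, desc in collection.items()}
    return sorted(distances, key=distances.get)
```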